About the project

I’m very excited about the course! I’m going to learn something cool and start actively using GitHub. My GitHub page.


Linear regression analysis

First, let’s load the data, which represents the relationship between learning approaches and students’ achievements. It consists of 166 observations of 7 variables; the columns can be seen in the structure output below:

learning2014 <- read.csv("/Users/anastasia/IODS-project/data/learning2014.csv")
dim(learning2014)
## [1] 166   7
str(learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

Now we can conduct a graphical analysis:

library(GGally)
## Loading required package: ggplot2
library(ggplot2)

# creates pairs plot
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
p    

The first row of the pairs plot above shows the histogram of the gender distribution (a dichotomous variable) and genderwise box plots, where the ends of each box are the upper and lower quartiles and the median is marked by a vertical line inside the box. The first column shows genderwise distributions of “age”, “attitude”, “deep”, “stra”, “surf” and “points”. Pink refers to females, blue to males.
From the rest of the plot we can see:

  • The highest correlation coefficients with the target variable “points” are observed for “attitude”, “stra” and “surf”.
  • Gender distribution is imbalanced: there are roughly twice as many females as males.
  • Age distribution is severely skewed towards young ages. The rest of the distributions are also skewed, but not severely, and mostly have two peaks.
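As a quick numeric check of the correlations visible in the pairs plot, the correlation matrix of the numeric columns could be printed like this (a sketch):

# correlations between the numeric variables of learning2014, rounded to two decimals
round(cor(learning2014[, c("age", "attitude", "deep", "stra", "surf", "points")]), 2)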
Now we conduct the linear regression analysis, where “points” is the dependent variable and the explanatory variables are “attitude”, “stra” and “surf”, which have the highest correlations with our target.

# creates a regression model with multiple explanatory variables

my_model <- lm(points ~ attitude + stra + surf, data = learning2014)

# prints a summary of the model
summary(my_model)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

The summary of the model suggests the following interpretation: holding the other explanatory variables fixed, a one-unit increase in “attitude” is associated with an increase of about 3.4 exam points, a one-unit increase in “stra” with an increase of about 0.85 points, and a one-unit increase in “surf” with a decrease of about 0.59 points.

However, the p-values suggest that only “attitude” is significant in the model at the 5% significance level. If you are not familiar with p-values, they are used to determine statistical significance in a hypothesis test (in our case, whether the coefficients of the linear regression are zero): the p-value is the probability of obtaining a test statistic at least as extreme as the observed one under a true null hypothesis, where a true null stands for the situation in which the corresponding coefficient is zero (zero influence on “points”).
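The p-values can also be extracted directly from the model summary, for example like this (a sketch):

# p-values of the coefficient tests
summary(my_model)$coefficients[, "Pr(>|t|)"]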
We fit another model without the “surf” variable:

# creates a regression model with multiple explanatory variables
my_model2 <- lm(points ~ attitude + stra, data = learning2014)

# prints a summary of the model
summary(my_model2)
## 
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

Now both variables are significant at the 10% significance level (p-values are less than 0.1). The new interpretation: holding “stra” fixed, a one-unit increase in “attitude” is associated with an increase of about 3.5 exam points, and holding “attitude” fixed, a one-unit increase in “stra” is associated with an increase of about 0.91 points.
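As a quick complement to these point estimates, confidence intervals for the coefficients of the reduced model could be computed like this (a sketch):

# 95% confidence intervals for the coefficients of the reduced model
confint(my_model2, level = 0.95)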

# set plots' locations
par(mfrow = c(2,2))
# creates diagnostic plots
plot(my_model, which=c(1, 2, 5))

The scatter plot of residuals vs fitted values (top left) illustrates that the residuals are evenly spread around zero. Thus, the assumption of homoscedasticity holds: the variance of the errors around the regression line is roughly the same for all fitted values. The Q-Q plot of the model residuals (top right) provides a way to check whether the normality-of-errors assumption (underlying linear regression) holds; in our case it shows a very reasonable fit. The scatter plot of residuals vs leverage (bottom left) illustrates the impact single observations have on the model. Three observations stand out: 35, 77 and 145. However, they don’t severely influence the regression line. We can conclude that our linear model satisfies the usual assumptions.


Logistic regression

Chapter description

The following chapter analyses the alcohol consumption of students from two Portuguese schools. The data attributes include student grades and demographic, social and school-related features; the data were collected using school reports and questionnaires. The variables’ names can be found below and their detailed description here. The purpose of the current analysis is to study the relationships between high/low alcohol consumption and some of the other variables in the data.

alc <- read.csv("/Users/anastasia/IODS-project/data/alc.csv")
dim(alc)
## [1] 382  35
colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"
str(alc)
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
##  $ sex       : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
##  $ famsize   : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
##  $ Pstatus   : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
##  $ Fjob      : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
##  $ reason    : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
##  $ nursery   : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
##  $ internet  : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
##  $ guardian  : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
##  $ famsup    : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
##  $ paid      : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
##  $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
##  $ higher    : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ romantic  : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...

Exploratory data analysis

To select some interesting variables for further analysis, it is helpful to visualize them first.

## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
## 
##     nasa
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
## Warning: attributes are not identical across measure variables;
## they will be dropped
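The warning above comes from reshaping the data to long format for plotting (columns of mixed types lose their attributes). A sketch of how the bar plots of all variables could have been produced, assuming dplyr (loaded above), tidyr and ggplot2:

library(tidyr)
library(ggplot2)
# draw a bar plot of each variable of alc (sketch of the hidden chunk)
gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()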

library(corrr)
alc %>% select_if(is.numeric) %>% correlate() %>% focus(alc_use, Dalc, Walc)
## 
## Correlation method: 'pearson'
## Missing treated using: 'pairwise.complete.obs'
## # A tibble: 14 x 4
##    rowname     alc_use    Dalc     Walc
##    <chr>         <dbl>   <dbl>    <dbl>
##  1 age         0.162    0.134   0.157  
##  2 Medu        0.00924  0.0454 -0.0173 
##  3 Fedu        0.00859  0.0215 -0.00168
##  4 traveltime  0.163    0.160   0.140  
##  5 studytime  -0.246   -0.186  -0.251  
##  6 failures    0.185    0.162   0.174  
##  7 famrel     -0.121   -0.0930 -0.122  
##  8 freetime    0.178    0.197   0.138  
##  9 goout       0.387    0.268   0.411  
## 10 health      0.0779   0.0625  0.0768 
## 11 absences    0.215    0.174   0.210  
## 12 G1         -0.176   -0.169  -0.155  
## 13 G2         -0.159   -0.151  -0.140  
## 14 G3         -0.156   -0.159  -0.131

Hypothesis testing

Based on the computed correlation coefficients and my personal reasoning, I come up with the following hypotheses:

  1. Male schoolers consume more alcohol than female schoolers (sex)
  2. Alcohol consumption decreases grades (in fact it’s three variables: G1, G2, G3)
  3. Alcohol consumption increases the number of school absences (absences)
  4. Going out with friends increases alcohol consumption (goout)

Now I’m going to visualize them one by one. There are 198 females and 184 males in our dataset, so it’s quite balanced. The following graphs suggest that in general female schoolers consume more alcohol than males, but when it comes to high alcohol consumption (\(\geq 3\)), males take the lead. So my hypothesis is only partially true.
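The counts and the genderwise graphs referred to above could be reproduced roughly like this (a sketch; the actual hidden chunk may differ):

# counts of females and males
table(alc$sex)
# genderwise distribution of alcohol use
ggplot(alc, aes(x = alc_use, fill = sex)) + geom_bar(position = "dodge")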

Next, we explore the relationship between alcohol use and grades. First, I compute the mean grade (the average of G1, G2 and G3).

alc$G <- (alc$G1 + alc$G2 + alc$G3)/3

Now I plot the relationship. The overall trend supports my hypothesis of a negative relationship between alcohol use and grades.

I do the same for absences. The overall trend again supports my hypothesis: the higher the alcohol consumption, the more absences.

Here it’s clearly seen how going out with friends increases alcohol consumption.
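The three plots described above could be drawn, for example, as box plots by the high/low consumption indicator (a sketch):

# box plots of mean grade, absences and going out by high/low alcohol use
ggplot(alc, aes(x = high_use, y = G)) + geom_boxplot()
ggplot(alc, aes(x = high_use, y = absences)) + geom_boxplot()
ggplot(alc, aes(x = high_use, y = goout)) + geom_boxplot()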

Building logistic regression model

I built a logistic regression model with the binary high/low alcohol consumption variable as the target and the following explanatory variables:

  • sex
  • G
  • absences
  • goout

The fitted model (shown below) says that, holding G, absences and goout at fixed values, the odds of high alcohol consumption for males (sexM = 1) over the odds for females (sexM = 0) are exp(0.95272) = 2.593. In other words, high alcohol consumption is about 2.6 times more likely for males than for females. The coefficient for G says that, holding sex, absences and goout at fixed values, we will see about a 6% decrease in the odds of high alcohol consumption for a one-unit increase in grades (G), since exp(-0.05877) = 0.943. Holding sex, G and goout at fixed values, we will see about an 8% increase in the odds of high alcohol consumption for a one-unit increase in absences, since exp(0.08105) = 1.084. Holding sex, G and absences at fixed values, we will see about a 102% increase in the odds of high alcohol consumption for a one-unit increase in goout, since exp(0.70547) = 2.025.

my_model <- glm(high_use ~ sex + G + absences + goout, data = alc, family = "binomial")
summary(my_model)
## 
## Call:
## glm(formula = high_use ~ sex + G + absences + goout, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.9328  -0.8060  -0.5331   0.8238   2.4661  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.39721    0.74736  -4.546 5.48e-06 ***
## sexM         0.95272    0.25510   3.735 0.000188 ***
## G           -0.05877    0.04572  -1.286 0.198594    
## absences     0.08105    0.02236   3.625 0.000289 ***
## goout        0.70547    0.12070   5.845 5.07e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 386.09  on 377  degrees of freedom
## AIC: 396.09
## 
## Number of Fisher Scoring iterations: 4
coef(my_model)
## (Intercept)        sexM           G    absences       goout 
## -3.39721198  0.95271930 -0.05877270  0.08105372  0.70546954
odds <- coef(my_model) %>% exp
ci <- confint(my_model) %>% exp
## Waiting for profiling to be done...
cbind(odds, ci)
##                   odds       2.5 %    97.5 %
## (Intercept) 0.03346645 0.007372191 0.1390862
## sexM        2.59275054 1.581774204 4.3094277
## G           0.94292107 0.861487242 1.0310898
## absences    1.08442915 1.039156246 1.1357634
## goout       2.02479718 1.607264661 2.5826300

From both the p-values of my regression model and the confidence intervals for the odds ratios, it is clear that the variable G is not statistically significant (its p-value is about 0.2 and its confidence interval includes one).

Predictive power

First, I remove the redundant G variable from my model.

my_model_new <- glm(high_use ~ sex + absences + goout, data = alc, family = "binomial")
summary(my_model_new)
## 
## Call:
## glm(formula = high_use ~ sex + absences + goout, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.7871  -0.8153  -0.5446   0.8357   2.4740  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -4.16317    0.47506  -8.764  < 2e-16 ***
## sexM         0.95872    0.25459   3.766 0.000166 ***
## absences     0.08418    0.02237   3.764 0.000167 ***
## goout        0.72981    0.11970   6.097 1.08e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 387.75  on 378  degrees of freedom
## AIC: 395.75
## 
## Number of Fisher Scoring iterations: 4

Cross tabulation of predictions versus the actual values:

probabilities <- predict(my_model_new, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)
select(alc, sex, G, absences, goout, high_use, probability, prediction) %>% tail(10)
##     sex         G absences goout high_use probability prediction
## 373   M  4.000000        0     2    FALSE  0.14869987      FALSE
## 374   M  4.666667        7     3     TRUE  0.39514446      FALSE
## 375   F 12.333333        1     3    FALSE  0.13129452      FALSE
## 376   F  7.000000        6     3    FALSE  0.18714923      FALSE
## 377   F  7.000000        2     2    FALSE  0.07342805      FALSE
## 378   F 11.666667        2     4    FALSE  0.25434555      FALSE
## 379   F  5.333333        2     2    FALSE  0.07342805      FALSE
## 380   F  6.666667        3     1    FALSE  0.03989428      FALSE
## 381   M 12.666667        4     5     TRUE  0.68596604       TRUE
## 382   M 10.666667        2     1     TRUE  0.09060457      FALSE
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   253   15
##    TRUE     65   49

As can be seen from the table, my model correctly classified \(253+49=302\) observations and misclassified \(65+15=80\) observations. That means the training error is \(80/382 \approx 0.21\).

g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g + geom_point()

table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.66230366 0.03926702 0.70157068
##    TRUE  0.17015707 0.12827225 0.29842932
##    Sum   0.83246073 0.16753927 1.00000000

Again, the average proportion of incorrectly classified observations (the training error), this time computed with a loss function:

# loss function: the proportion of observations whose predicted probability
# is more than 0.5 away from the true class (i.e. the misclassification rate)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2094241

Cross-validation

library(boot)
set.seed(12345)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model, K = 10)
cv$delta[1]
## [1] 0.2094241

My model has better test set performance compared to that introduced in DataCamp (0.21 < 0.26).

Comparative analysis of different models

model1 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model1)

I start from model1 above, which contains all the available explanatory variables, and remove them one at a time, always dropping the variable with the highest p-value. I’ll first exclude the ‘higher’ variable, since it has the highest p-value.

model2 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + internet + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model2)

Next, I exclude ‘internet’.

model3 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model3)

Exclude ‘schoolsup’.

model4 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + guardian + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model4)

Exclude ‘reason’.

model5 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + nursery + guardian + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model5)

Exclude ‘Medu’.

model6 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus  + Fedu + Mjob + Fjob + nursery + guardian + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model6)

Exclude ‘guardian’.

model7 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus  + Fedu + Mjob + Fjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model7)

Exclude ‘Fjob’.

model8 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus  + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model8)

Exclude G.

model9 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus  + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model9)

Exclude ‘Pstatus’.

model10 <- glm(high_use ~ school + sex + age + address + famsize  + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model10)

Exclude ‘school’.

model11 <- glm(high_use ~ sex + age + address + famsize  + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model11)

Exclude ‘Mjob’.

model12 <- glm(high_use ~ sex + age + address + famsize  + Fedu + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model12)

Exclude ‘age’.

model13 <- glm(high_use ~ sex + address + famsize  + Fedu + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model13)

Exclude ‘famsup’.

model14 <- glm(high_use ~ sex + address + famsize  + Fedu + nursery + traveltime + studytime + failures + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model14)

Exclude ‘freetime’.

model15 <- glm(high_use ~ sex + address + famsize  + Fedu + nursery + traveltime + studytime + failures + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model15)

Exclude ‘Fedu’.

model16 <- glm(high_use ~ sex + address + famsize + nursery + traveltime + studytime + failures + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model16)

Exclude ‘failures’.

model17 <- glm(high_use ~ sex + address + famsize + nursery + traveltime + studytime + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model17)

Exclude ‘famsize’.

model18 <- glm(high_use ~ sex + address + nursery + traveltime + studytime + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model18)

Exclude ‘romantic’.

# note: this reuses the name model18 and overwrites the previous fit
model18 <- glm(high_use ~ sex + address + nursery + traveltime + studytime + paid + activities + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model18)

Exclude ‘nursery’.

model19 <- glm(high_use ~ sex + address + traveltime + studytime + paid + activities + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model19)

Exclude ‘traveltime’.

model20 <- glm(high_use ~ sex + address + studytime + paid + activities + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model20)

Exclude ‘health’.

model21 <- glm(high_use ~ sex + address + studytime + paid + activities + famrel + goout + absences, data = alc, family = "binomial")
# summary(model21)
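For reference, this kind of backward elimination can also be automated with R’s step() function, which drops variables based on AIC rather than on p-values, so it is a different (though related) procedure from the manual one above (a sketch):

# automated backward elimination starting from the full model, based on AIC
model_step <- step(model1, direction = "backward", trace = FALSE)
# summary(model_step)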

Now all the variables in my model are statistically significant. Let’s calculate test errors using cross validation.

set.seed(12345)
cv1 <- cv.glm(data = alc, cost = loss_func, glmfit = model1, K = 10)
cv2 <- cv.glm(data = alc, cost = loss_func, glmfit = model2, K = 10)
cv3 <- cv.glm(data = alc, cost = loss_func, glmfit = model3, K = 10)
cv4 <- cv.glm(data = alc, cost = loss_func, glmfit = model4, K = 10)
cv5 <- cv.glm(data = alc, cost = loss_func, glmfit = model5, K = 10)
cv6 <- cv.glm(data = alc, cost = loss_func, glmfit = model6, K = 10)
cv7 <- cv.glm(data = alc, cost = loss_func, glmfit = model7, K = 10)
cv8 <- cv.glm(data = alc, cost = loss_func, glmfit = model8, K = 10)
cv9 <- cv.glm(data = alc, cost = loss_func, glmfit = model9, K = 10)
cv10 <- cv.glm(data = alc, cost = loss_func, glmfit = model10, K = 10)
cv11 <- cv.glm(data = alc, cost = loss_func, glmfit = model11, K = 10)
cv12 <- cv.glm(data = alc, cost = loss_func, glmfit = model12, K = 10)
cv13 <- cv.glm(data = alc, cost = loss_func, glmfit = model13, K = 10)
cv14 <- cv.glm(data = alc, cost = loss_func, glmfit = model14, K = 10)
cv15 <- cv.glm(data = alc, cost = loss_func, glmfit = model15, K = 10)
cv16 <- cv.glm(data = alc, cost = loss_func, glmfit = model16, K = 10)
cv17 <- cv.glm(data = alc, cost = loss_func, glmfit = model17, K = 10)
cv18 <- cv.glm(data = alc, cost = loss_func, glmfit = model18, K = 10)
cv19 <- cv.glm(data = alc, cost = loss_func, glmfit = model19, K = 10)
cv20 <- cv.glm(data = alc, cost = loss_func, glmfit = model20, K = 10)
cv21 <- cv.glm(data = alc, cost = loss_func, glmfit = model21, K = 10)

test_errors <- c(cv1$delta[1], cv2$delta[1], cv3$delta[1], cv4$delta[1], cv5$delta[1], cv6$delta[1], cv7$delta[1], cv8$delta[1], cv9$delta[1], cv10$delta[1], cv11$delta[1], cv12$delta[1], cv13$delta[1], cv14$delta[1], cv15$delta[1], cv16$delta[1], cv17$delta[1], cv18$delta[1], cv19$delta[1], cv20$delta[1], cv21$delta[1])

And also the training errors.

probabilities1 <- predict(model1, type = "response")
probability1 <- probabilities1 > 0.5
probabilities2 <- predict(model2, type = "response")
probabilities3 <- predict(model3, type = "response")
probabilities4 <- predict(model4, type = "response")
probabilities5 <- predict(model5, type = "response")
probabilities6 <- predict(model6, type = "response")
probabilities7 <- predict(model7, type = "response")
probabilities8 <- predict(model8, type = "response")
probabilities9 <- predict(model9, type = "response")
probabilities10 <- predict(model10, type = "response")
probabilities11 <- predict(model11, type = "response")
probabilities12 <- predict(model12, type = "response")
probabilities13 <- predict(model13, type = "response")
probabilities14 <- predict(model14, type = "response")
probabilities15 <- predict(model15, type = "response")
probabilities16 <- predict(model16, type = "response")
probabilities17 <- predict(model17, type = "response")
probabilities18 <- predict(model18, type = "response")
probabilities19 <- predict(model19, type = "response")
probabilities20 <- predict(model20, type = "response")
probabilities21 <- predict(model21, type = "response")
loss1 <- loss_func(class = alc$high_use, prob = probabilities1)
loss2 <- loss_func(class = alc$high_use, prob = probabilities2)
loss3 <- loss_func(class = alc$high_use, prob = probabilities3)
loss4 <- loss_func(class = alc$high_use, prob = probabilities4)
loss5 <- loss_func(class = alc$high_use, prob = probabilities5)
loss6 <- loss_func(class = alc$high_use, prob = probabilities6)
loss7 <- loss_func(class = alc$high_use, prob = probabilities7)
loss8 <- loss_func(class = alc$high_use, prob = probabilities8)
loss9 <- loss_func(class = alc$high_use, prob = probabilities9)
loss10 <- loss_func(class = alc$high_use, prob = probabilities10)
loss11 <- loss_func(class = alc$high_use, prob = probabilities11)
loss12 <- loss_func(class = alc$high_use, prob = probabilities12)
loss13 <- loss_func(class = alc$high_use, prob = probabilities13)
loss14 <- loss_func(class = alc$high_use, prob = probabilities14)
loss15 <- loss_func(class = alc$high_use, prob = probabilities15)
loss16 <- loss_func(class = alc$high_use, prob = probabilities16)
loss17 <- loss_func(class = alc$high_use, prob = probabilities17)
loss18 <- loss_func(class = alc$high_use, prob = probabilities18)
loss19 <- loss_func(class = alc$high_use, prob = probabilities19)
loss20 <- loss_func(class = alc$high_use, prob = probabilities20)
loss21 <- loss_func(class = alc$high_use, prob = probabilities21)

train_errors <- c(loss1, loss2, loss3, loss4, loss5, loss6, loss7, loss8, loss9, loss10, loss11, loss12, loss13, loss14, loss15, loss16, loss17, loss18, loss19, loss20, loss21)
vars <- seq(from = 1, to = 21)
rates <- seq(from = 15, to = 25)

Finally, let’s plot the errors: training errors in blue and test errors in red. It is clearly seen that as the number of explanatory variables in a model decreases, the training error decreases (since at the beginning there were too many redundant variables), and the test error stays more or less the same. In general, the more variables, the lower the training error and the higher the test error, due to overfitting.

errors <- data_frame(train_errors, test_errors)
## Warning: `data_frame()` is deprecated, use `tibble()`.
## This warning is displayed once per session.
p = ggplot() + 
  geom_line(data = errors, aes(x=vars, y = train_errors), color = "blue") + 
  geom_line(data = errors, aes(x=vars, y = test_errors), color = "red") +
  xlab('Models') +
  ylab('Error rates')
p


Clustering and classification

Data description

For this week’s analysis I use the Boston dataset from the MASS package. It contains housing values in suburbs of Boston and consists of 506 observations of the following 14 variables:

  1. crim – per capita crime rate by town
  2. zn – proportion of residential land zoned for lots over 25,000 sq.ft.
  3. indus – proportion of non-retail business acres per town
  4. chas – Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
  5. nox – nitrogen oxides concentration (parts per 10 million)
  6. rm – average number of rooms per dwelling
  7. age – proportion of owner-occupied units built prior to 1940
  8. dis – weighted mean of distances to five Boston employment centres
  9. rad – index of accessibility to radial highways
  10. tax – full-value property-tax rate per $10,000
  11. ptratio – pupil-teacher ratio by town
  12. black – 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
  13. lstat – lower status of the population (percent)
  14. medv – median value of owner-occupied homes in $1000s

Exploratory analysis

First, let’s explore the dataset a bit:

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
data("Boston")
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14

Let’s have a closer look at variables and their distributions.
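The numerical summaries below were presumably produced with the base summary() call (a sketch of the hidden chunk):

# numerical summaries of all 14 variables
summary(Boston)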

##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

We can check the relationships between the variables using a correlation matrix and a separate correlation plot:

## ── Attaching packages ─────────────────────────────────────────────────────────────────────── tidyverse 1.2.1 ──
## ✔ tibble  2.1.1     ✔ purrr   0.3.2
## ✔ readr   1.3.1     ✔ stringr 1.4.0
## ✔ tibble  2.1.1     ✔ forcats 0.4.0
## ── Conflicts ────────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ✖ MASS::select()  masks dplyr::select()
## corrplot 0.84 loaded
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47
##         ptratio black lstat  medv
## crim       0.29 -0.39  0.46 -0.39
## zn        -0.39  0.18 -0.41  0.36
## indus      0.38 -0.36  0.60 -0.48
## chas      -0.12  0.05 -0.05  0.18
## nox        0.19 -0.38  0.59 -0.43
## rm        -0.36  0.13 -0.61  0.70
## age        0.26 -0.27  0.60 -0.38
## dis       -0.23  0.29 -0.50  0.25
## rad        0.46 -0.44  0.49 -0.38
## tax        0.46 -0.44  0.54 -0.47
## ptratio    1.00 -0.18  0.37 -0.51
## black     -0.18  1.00 -0.37  0.33
## lstat      0.37 -0.37  1.00 -0.74
## medv      -0.51  0.33 -0.74  1.00
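For reference, the rounded correlation matrix above and the correlation plot it accompanies could be produced roughly like this (a sketch; the messages above show tidyverse and corrplot being loaded):

library(corrplot)
# correlation matrix rounded to two decimals, visualized as a correlation plot
cor_matrix <- cor(Boston) %>% round(digits = 2)
corrplot(cor_matrix, method = "circle", type = "upper")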

The highest correlations are observed between:

  • proportion of non-retail business acres (indus) and weighted distances to Boston employment centres (dis) are strongly negatively correlated: -0.71
  • nitrogen oxides concentration (nox) and weighted distances to Boston employment centres (dis) are strongly negatively correlated: -0.77
  • proportion of owner-occupied units built prior to 1940 (age) and weighted distances to Boston employment centres (dis) are strongly negatively correlated: -0.75
  • lower status of the population (lstat) and median value of owner-occupied homes (medv) are strongly negatively correlated: -0.74
  • index of accessibility to radial highways (rad) and full-value property-tax rate (tax) are strongly positively correlated: 0.91
  • proportion of non-retail business acres (indus) and nitrogen oxides concentration (nox) are strongly positively correlated: 0.76
  • proportion of non-retail business acres (indus) and full-value property-tax rate (tax) are strongly positively correlated: 0.72
  • nitrogen oxides concentration (nox) and proportion of owner-occupied units built prior to 1940 (age) are strongly positively correlated: 0.73
  • median value of owner-occupied homes (medv) and average number of rooms per dwelling (rm) are strongly positively correlated: 0.70

Data wrangling

Since my variables are measured on different scales, for further analysis I have to scale the data, subtracting the column means from the corresponding columns and dividing the differences by the standard deviations.
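In formula form, each column \(x\) is replaced by \(\frac{x - \bar{x}}{s_x}\), where \(\bar{x}\) is the column mean and \(s_x\) the column standard deviation.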

boston_scaled <- scale(Boston)
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
boston_scaled <- as.data.frame(boston_scaled)

As can be seen, all the variables’ means are zero after scaling.

I also create a factor variable from the numerical crim, categorizing it by quantiles into low, med_low, med_high and high crime rates.

bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127

I remove the initial variable crim and add the new categorical one.

boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)

I split my data into train (80%) and test (20%) sets in order to assess the quality of the model I am going to build. The model is trained on the train set, and predictions on new data are made with the test set.

n <- nrow(boston_scaled)
set.seed(12345)
ind <- sample(n,  size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]

Linear discriminant analysis (LDA)

lda_model <- lda(crime~., data=train)
lda_model
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2475248 0.2574257 0.2475248 0.2475248 
## 
## Group means:
##                   zn      indus         chas        nox         rm
## low       0.97229754 -0.9597872 -0.154216061 -0.8889207  0.4570563
## med_low  -0.08464861 -0.3244885 -0.007331936 -0.5892641 -0.1082762
## med_high -0.39119540  0.1704608  0.200122961  0.3770782  0.1375651
## high     -0.48724019  1.0171519 -0.075474056  1.0547756 -0.5001506
##                 age        dis        rad        tax     ptratio
## low      -0.8780782  0.9035281 -0.6913090 -0.7279533 -0.39471367
## med_low  -0.3707589  0.3894044 -0.5434660 -0.5306788 -0.04341686
## med_high  0.4098247 -0.3529031 -0.4041923 -0.3146912 -0.31203261
## high      0.8165907 -0.8472627  1.6377820  1.5138081  0.78037363
##               black      lstat        medv
## low       0.3857136 -0.7891212  0.51279116
## med_low   0.3160280 -0.1608905  0.01640163
## med_high  0.0563347 -0.0252292  0.17942536
## high     -0.6736089  0.9100659 -0.66442722
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.06453072  0.74240382 -0.88262071
## indus    0.06465153 -0.34822618  0.43060847
## chas    -0.07437636 -0.09918695  0.12926793
## nox      0.32119190 -0.73282709 -1.30214445
## rm      -0.12621343 -0.11008660 -0.18940456
## age      0.24857359 -0.28701321 -0.33705453
## dis     -0.06379877 -0.31544376  0.10788004
## rad      3.04902524  0.98466813  0.09844024
## tax      0.12625765 -0.05144994  0.20511932
## ptratio  0.09254812  0.02946943 -0.18922812
## black   -0.09973884  0.03443536  0.12859239
## lstat    0.21995230 -0.17129701  0.51685693
## medv     0.22046620 -0.39973922 -0.12354310
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9459 0.0403 0.0139

The prior probabilities are the observed proportions of the four crime classes in the training set (roughly 1/4 each). The coefficients mean that the first discriminant function (LD1) is a linear combination of the variables: \(0.065∗zn+0.065∗indus⋯+0.22∗medv\). The proportion of trace shows how much of the between-group variance each discriminant explains: linear discriminant 1 explains almost 95% of it.

Let’s draw the LDA biplot:
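Since lda.arrows() is called again later in the bonus part, a helper along these lines must have been defined in a hidden chunk; here is a sketch of the biplot code (the scale value and plotting arguments are my assumptions):

# helper for adding arrows for the original variables to an LDA plot (sketch)
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[,choices[1]],
         y1 = myscale * heads[,choices[2]], col = color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
# plot the LDA results, coloring and marking points by crime class
classes <- as.numeric(train$crime)
plot(lda_model, dimen = 2, col = classes, pch = classes)
lda.arrows(lda_model, myscale = 2)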

The most influential linear separators for the classes are rad, zn and nox. I save the correct classes from the test data set and then remove them from the test data frame itself, since I am going to test my model on it, and the information about the correct classification must not be available to the model.

correct_classes <- test$crime
test <- dplyr::select(test, -crime)

Now I will make predictions based on a model:

lda.pred <- predict(lda_model, newdata = test)

And check the quality of prediction with cross-tabulation:

table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       14      12        1    0
##   med_low    1      17        4    0
##   med_high   0       7       18    1
##   high       0       0        0   27

It can be seen that the model predicts the high crime rate almost perfectly, but the lower categories, especially the middle ones, are often confused with their neighbours, since they are probably less separable from each other. This is also clearly visible in the biplot, where green (med_high) and red (med_low) overlap heavily.

k-means

boston_scaled2 <- scale(Boston)
boston_scaled2 <- as.data.frame(boston_scaled2)

For calculating the distances between the observations I will use the most common Euclidean method.

dist_eu <- dist(boston_scaled2, method = "euclidean")
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970

Now I run the k-means algorithm. To determine the optimal number of clusters, let’s look at the total within-cluster sum of squares (WCSS). The optimal number of clusters is where the total WCSS drops radically; here that is 2.

set.seed(123)
# max number of clusters
k_max <- 10
# the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled2, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')

km <- kmeans(boston_scaled2, centers = 2)
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero

## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero

## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero

## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
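The plot referred to below was probably drawn by coloring a pairs-style plot by the cluster assignment; the cor() warnings above would then come from a variable (such as chas) being constant within a cluster. A simple base-R sketch of the same idea:

# scatter plot matrix of the scaled variables, colored by the two k-means clusters
pairs(boston_scaled2, col = km$cluster)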

Looking at the plot we can see that for many variables the two clusters are indeed separable, especially when looking at the distributions and the correlation coefficients (which differ between the two clusters). Among the most visible and distinguishable differences between the clusters are:

  • indus
  • nox
  • age
  • dis
  • rad
  • tax
  • ptratio

Bonus

Let’s perform k-means with 3 clusters.

km_new <- kmeans(boston_scaled2, centers = 3)

I perform LDA using the clusters as target classes.

new_data <- dplyr::select(boston_scaled2, -crim)
new_data <- data.frame(new_data, km_new$cluster)
set.seed(12345)
train_new <- new_data[ind,]
test_new <- new_data[-ind,]
lda_model_new <- lda(km_new.cluster~., data=train_new)
lda_model_new
## Call:
## lda(km_new.cluster ~ ., data = train_new)
## 
## Prior probabilities of groups:
##         1         2         3 
## 0.2896040 0.4331683 0.2772277 
## 
## Group means:
##           zn      indus        chas        nox         rm        age
## 1 -0.4872402  1.0650531 -0.03677606  1.1318802 -0.5060351  0.7835135
## 2 -0.3992808 -0.1386820  0.08763438 -0.1782981 -0.1640318  0.1855020
## 3  1.1380713 -0.9938046 -0.13171834 -0.9662319  0.7687324 -1.1415994
##           dis        rad        tax     ptratio      black       lstat
## 1 -0.84490528  1.4132224  1.3907131  0.61746316 -0.6404477  0.91465920
## 2 -0.06275337 -0.5802359 -0.5441956 -0.04043224  0.2486883 -0.05609896
## 3  1.07741104 -0.5901619 -0.6745844 -0.55643005  0.3671677 -0.93177559
##          medv
## 1 -0.67072240
## 2 -0.06706528
## 3  0.84549682
## 
## Coefficients of linear discriminants:
##                  LD1         LD2
## zn       0.043309481  0.83222632
## indus   -0.272397029  0.01729030
## chas     0.004084862 -0.17907457
## nox     -0.795442248  0.46890783
## rm       0.110569407  0.39122239
## age      0.082133532 -0.93047496
## dis      0.051331466  0.31679703
## rad     -1.526920272  0.77083066
## tax     -0.858771883  0.18114143
## ptratio -0.055424237  0.01223941
## black    0.042493264 -0.02976446
## lstat   -0.376381397  0.43131410
## medv     0.004158310  0.53794043
## 
## Proportion of trace:
##    LD1    LD2 
## 0.8753 0.1247

Coefficients mean that the first discriminant function (LD1) is a linear combination of the variables: \(0.043∗zn-0.27∗indus⋯+0.004∗medv\) etc. Let’s plot:

classes_new <- as.numeric(train_new$km_new.cluster)
plot(lda_model_new, dimen = 2, col = classes_new, pch = classes_new)
lda.arrows(lda_model_new, myscale = 1)

This time the most influential linear separators for the clusters are rad, age and zn.

correct_classes_new <- test_new$km_new.cluster
test_new <- dplyr::select(test_new, -km_new.cluster)

Super bonus

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda_model$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda_model$scaling
matrix_product <- as.data.frame(matrix_product)

Plotly graph:

## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
km_cluster <- as.data.frame(km$cluster)
km_set <- km_cluster[ind,]
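The 3D graph itself could be drawn roughly as follows, first colored by the crime classes of the training set and then by the k-means clusters (a sketch; the exact arguments are my assumptions):

library(plotly)
# 3D scatter plot of the LDA projection, colored by the crime classes
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = "scatter3d", mode = "markers", color = train$crime)
# the same points colored by the k-means cluster assignment
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = "scatter3d", mode = "markers", color = as.factor(km_set))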

First, the number of groups is different, since for k-means I set only two clusters. But what we can see is that one cluster stands out clearly, while the rest of the observations are less separable.


Dimensionality reduction techniques

Note that I put the data wrangling in the R Markdown file on purpose. The data aggregation process in this assignment is quite complicated, and since we assume the reader has no previous knowledge of it, I want to make it clearer by displaying it here. 😇 🤓

Data description

This week I use the ‘human’ data, which describes the development of countries, taking people’s capabilities and gender inequality into account. Variable descriptions:

  • hdi_rank – country rank according to hdi
  • country – country name
  • life_exp – life expectancy at birth
  • exp_edu – expected years of schooling for children of school entering age
  • mean_edu – mean of years of schooling for adults aged 25 years and more
  • gni – gross national income per capita
  • hdi – The Human Development Index, a summary measure of average achievement in key dimensions of human development: a long and healthy life, being knowledgeable and having a decent standard of living (the geometric mean of normalized indices for each of the three dimensions)
  • gii_rank – country rank according to gii
  • gii – The Gender Inequality Index, a summary measure of reproductive health, empowerment and labour market characteristics (the geometric mean of normalized indices for each of the three dimensions)
  • mm_ratio – maternal mortality ratio
  • adol_birth – adolescent birth rates
  • parliament – proportion of parliamentary seats occupied by females
  • edu_fem – proportion of adult females aged 25 years and older with at least some secondary education
  • edu_m – proportion of adult males aged 25 years and older with at least some secondary education
  • labour_fem – labour force participation rate of female population aged 15 years and older
  • labour_m – labour force participation rate of male population aged 15 years and older
  • edu_ratio – ratio of edu_fem to edu_m
  • labour_ratio – ratio of labour_fem to labour_m

Here’s the illustration:

Now we transform the data a bit:

library(dplyr)
library(stringr)
human <- read.csv("~/IODS-project/data/human.csv")
# transform gni to numeric
human$gni <- str_replace(human$gni, pattern=",", replace ="") %>% as.numeric
human$gni <- as.numeric(human$gni)
# exclude unneeded variables
hvars <- names(human) %in% c("country", "edu_ratio", "labour_ratio", "exp_edu", "life_exp", "gni", "mm_ratio", "adol_birth", "parliament")
human <- human[hvars]
human <- na.omit(human)
# removing the last 7 observations, which relate to regions instead of countries (World, Sub-Saharan Africa, South Asia, Latin America and the Caribbean, Europe and Central Asia, East Asia and the Pacific, etc.)
last <- nrow(human) - 7
human <- human[1:last, ]
# adding countries as rownames
rownames(human) <- human$country
human <- dplyr::select(human, -country)

Exploratory analysis

Let’s first visualize ‘human’ and check correlations.
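The matrix below (and the plot it accompanies) could be produced roughly like this (a sketch; GGally and dplyr are already loaded):

# visualize all variables of 'human' and print rounded correlations
ggpairs(human)
cor(human) %>% round(digits = 2)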

##              life_exp exp_edu   gni mm_ratio adol_birth parliament
## life_exp         1.00    0.79  0.63    -0.86      -0.73       0.17
## exp_edu          0.79    1.00  0.62    -0.74      -0.70       0.21
## gni              0.63    0.62  1.00    -0.50      -0.56       0.09
## mm_ratio        -0.86   -0.74 -0.50     1.00       0.76      -0.09
## adol_birth      -0.73   -0.70 -0.56     0.76       1.00      -0.07
## parliament       0.17    0.21  0.09    -0.09      -0.07       1.00
## edu_ratio        0.58    0.59  0.43    -0.66      -0.53       0.08
## labour_ratio    -0.14    0.05 -0.02     0.24       0.12       0.25
##              edu_ratio labour_ratio
## life_exp          0.58        -0.14
## exp_edu           0.59         0.05
## gni               0.43        -0.02
## mm_ratio         -0.66         0.24
## adol_birth       -0.53         0.12
## parliament        0.08         0.25
## edu_ratio         1.00         0.01
## labour_ratio      0.01         1.00

We can see that the highest correlations are observed between:

  • life expectancy at birth and maternal mortality ratio: -0.86
  • life expectancy at birth and expected years of education: 0.79
  • maternal mortality ratio and adolescent birth rates: 0.76
  • expected years of education and maternal mortality ratio: -0.74
  • life expectancy at birth and adolescent birth rates: -0.73
  • expected years of education and adolescent birth rates: -0.7

Most of the distributions are skewed; only the expected years of education variable is close to a normal distribution.

Principal component analysis (PCA)

Let’s perform PCA and visualize components.

# performing principal component analysis (with the SVD method)
pca_human <- prcomp(human)
# printing the results of the principal component analysis
pca_human
## Standard deviations (1, .., p=8):
## [1] 1.854416e+04 1.855219e+02 2.518701e+01 1.145441e+01 3.766241e+00
## [6] 1.565912e+00 1.912052e-01 1.591112e-01
## 
## Rotation (n x k) = (8 x 8):
##                        PC1           PC2           PC3           PC4
## life_exp     -2.815823e-04 -0.0283150248 -1.294971e-02  6.752684e-02
## exp_edu      -9.562910e-05 -0.0075529759 -1.427664e-02  3.313505e-02
## gni          -9.999832e-01  0.0057723054  5.156742e-04 -4.932889e-05
## mm_ratio      5.655734e-03  0.9916320120 -1.260302e-01  6.100534e-03
## adol_birth    1.233961e-03  0.1255502723  9.918113e-01 -5.301595e-03
## parliament   -5.526460e-05 -0.0032317269  7.398331e-03  9.971232e-01
## edu_ratio    -5.607472e-06 -0.0006713951  3.412027e-05  2.736326e-04
## labour_ratio  2.331945e-07  0.0002819357 -5.302884e-04  4.692578e-03
##                        PC5           PC6           PC7           PC8
## life_exp      0.9865644425  1.453515e-01 -5.380452e-03  2.281723e-03
## exp_edu       0.1431180282 -9.882477e-01  3.826887e-02  7.776451e-03
## gni          -0.0001135863  2.711698e-05  8.075191e-07 -1.176762e-06
## mm_ratio      0.0266373214 -1.695203e-03 -1.355518e-04  8.371934e-04
## adol_birth    0.0188618600 -1.273198e-02  8.641234e-05 -1.707885e-04
## parliament   -0.0716401914  2.309896e-02  2.642548e-03  2.680113e-03
## edu_ratio    -0.0022935252 -2.180183e-02 -6.998623e-01  7.139410e-01
## labour_ratio  0.0022190154 -3.264423e-02 -7.132267e-01 -7.001533e-01
biplot(pca_human, choices = 1:2, cex = c(0.8, 0.8), col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

Hardly any inferences can be made from this mess: gni dominates the first component simply because of its huge variance. Let’s scale the data.

# scaling the variables
human_std <- scale(human)
summary(human_std)
##     life_exp          exp_edu             gni             mm_ratio      
##  Min.   :-2.7188   Min.   :-2.7378   Min.   :-0.9193   Min.   :-0.6992  
##  1st Qu.:-0.6425   1st Qu.:-0.6782   1st Qu.:-0.7243   1st Qu.:-0.6496  
##  Median : 0.3056   Median : 0.1140   Median :-0.3013   Median :-0.4726  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6717   3rd Qu.: 0.7126   3rd Qu.: 0.3712   3rd Qu.: 0.1932  
##  Max.   : 1.4218   Max.   : 2.4730   Max.   : 5.6890   Max.   : 4.4899  
##    adol_birth        parliament        edu_ratio        labour_ratio    
##  Min.   :-1.1325   Min.   :-1.8203   Min.   :-2.8189   Min.   :-2.6247  
##  1st Qu.:-0.8394   1st Qu.:-0.7409   1st Qu.:-0.5233   1st Qu.:-0.5484  
##  Median :-0.3298   Median :-0.1403   Median : 0.3503   Median : 0.2316  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6030   3rd Qu.: 0.6127   3rd Qu.: 0.5958   3rd Qu.: 0.7350  
##  Max.   : 3.8344   Max.   : 3.1850   Max.   : 2.6646   Max.   : 1.6632

After scaling, all the variables’ means are zero.

Now, let’s perform PCA on standardized data and visualize components.

# performing principal component analysis (with the SVD method)
pca_human_st <- prcomp(human_std)
pca_human_st
## Standard deviations (1, .., p=8):
## [1] 2.0708380 1.1397204 0.8750485 0.7788630 0.6619563 0.5363061 0.4589994
## [8] 0.3222406
## 
## Rotation (n x k) = (8 x 8):
##                      PC1         PC2         PC3         PC4        PC5
## life_exp     -0.44372240  0.02530473 -0.10991305  0.05834819 -0.1628935
## exp_edu      -0.42766720 -0.13940571  0.07340270  0.07020294 -0.1659678
## gni          -0.35048295 -0.05060876  0.20168779  0.72727675  0.4950306
## mm_ratio      0.43697098 -0.14508727  0.12522539  0.25170614  0.1800657
## adol_birth    0.41126010 -0.07708468 -0.01968243 -0.04986763  0.4672068
## parliament   -0.08438558 -0.65136866 -0.72506309 -0.01396293  0.1523699
## edu_ratio    -0.35664370 -0.03796058  0.24223089 -0.62678110  0.5983585
## labour_ratio  0.05457785 -0.72432726  0.58428770 -0.06199424 -0.2625067
##                      PC6         PC7         PC8
## life_exp      0.42242796 -0.43406432 -0.62737008
## exp_edu       0.38606919  0.77962966  0.05415984
## gni          -0.11120305 -0.13711838  0.16961173
## mm_ratio     -0.17370039  0.35380306 -0.72193946
## adol_birth    0.76056557 -0.06897064  0.14335186
## parliament   -0.13749772  0.00568387  0.02306476
## edu_ratio    -0.17713316  0.05773644 -0.16459453
## labour_ratio  0.03500707 -0.22729927  0.07304568
# drawing a biplot of the principal component representation and the original variables
biplot(pca_human_st, choices = 1:2, cex = c(0.5, 0.5), col = c("grey40", "deeppink2"))

The arrows visualize the relationships between the original features and the principal components. Let's now discuss both biplots.

  1. In the unscaled biplot, the maternal mortality ratio and the adolescent birth rate point in the same direction: they are highly positively correlated with each other and highly negatively correlated with the rest of the analyzed variables. Gross national income has by far the highest standard deviation (the longest arrow). The adolescent birth rate, the maternal mortality ratio and the female/male labour market participation ratio contribute to the first principal component (PC1), whereas life expectancy, gross national income, expected years of schooling, the proportion of parliamentary seats held by women and the female/male ratio of mean years of education contribute to the second principal component (PC2), as can be seen from the PCA rotation table.
  2. In the standardized biplot, life expectancy, the female/male education ratio, expected years of schooling and gross national income are highly positively correlated (small angles between their arrows), and the same holds for the adolescent birth rate and the maternal mortality ratio. The proportion of parliamentary seats held by women and the female/male labour market participation ratio are also positively correlated with each other. Gross national income and expected years of schooling now have the smallest standard deviations of all (the shortest arrows). The contributions are broadly the same, although their weights have changed: the adolescent birth rate and the maternal mortality ratio contribute the most to PC1, while life expectancy and expected years of schooling contribute the most to PC2.

So the results differ because the raw variables are measured on very different scales. PCA is driven by variance, so without standardization the components are dominated by the variable with the largest variance; after standardizing, all variables contribute on an equal footing and we obtain a different, more meaningful numerical result. I stress numerical, because the way the components are formed is exactly the same.
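To see this numerically, one can compare the share of variance captured by each component in the two models. This is a small sketch of my own (using the pca_human and pca_human_st objects fitted above), not part of the original analysis:

# percentage of total variance captured by each PC, unscaled vs. standardized data
round(100 * summary(pca_human)$importance["Proportion of Variance", ], 1)
round(100 * summary(pca_human_st)$importance["Proportion of Variance", ], 1)

With the unscaled data the first component, dominated by gross national income, absorbs almost all of the variance, whereas with the standardized data the variance is spread more evenly across the components.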

Main insights from my perspective: gross national income has little to do with essential health variables such as the maternal mortality ratio, the adolescent birth rate and life expectancy. The first principal component clearly separates less developed countries through their high adolescent birth rates and maternal mortality ratios (in the biplot we can see Mozambique, Rwanda, Tanzania, Sierra Leone, etc.). The second component then spreads out the remaining, more developed countries.

Multiple Correspondence Analysis

Multiple Correspondence Analysis (MCA) is a method for analyzing qualitative data. It can be used to detect patterns or structure in the data as well as for dimension reduction.

Let's load the dataset about tea consumption.

library(FactoMineR)
library(dplyr)
data(tea)
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
# new dataset with only the selected columns
tea_time <- select(tea, one_of(keep_columns))

Let’s check summaries and structure:

summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...

Now I visualize the dataset:
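The plotting code is not echoed above; assuming dplyr, tidyr and ggplot2 are available, bar plots of each factor could be produced with something along these lines (a sketch of my own, and it is this kind of gather() call on factors that triggers the warning below):

library(tidyr)

# gather all columns into key-value pairs and draw one bar plot per variable
gather(tea_time) %>%
  ggplot(aes(value)) +
  geom_bar() +
  facet_wrap("key", scales = "free") +
  theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))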

## Warning: attributes are not identical across measure variables;
## they will be dropped

Performing MCA:

mca <- MCA(tea_time, graph = FALSE)
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |

From the last table (the squared correlations, eta2) we can see that the variables 'how' and 'where' are most strongly associated with the 1st dimension. They are also associated with the 2nd dimension.

Now let's visualize the MCA results, first by variable categories:

and then by individuals:
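The factor maps are drawn from the MCA object; with FactoMineR they could be produced roughly as follows (a sketch of plausible calls, not necessarily the exact original code):

# factor map of the variable categories (individuals hidden)
plot(mca, invisible = c("ind"), habillage = "quali")
# factor map of the individuals (categories hidden)
plot(mca, invisible = c("var"), habillage = "quali")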

From the factor map of the first two dimensions we can see that 'how' and 'where' are the two most closely related variables. There is a clear pattern: people who buy tea in chain stores drink it from tea bags; those who buy it in tea shops drink it unpackaged; and those who buy tea both in chain stores and in tea shops (surprise, surprise) drink both. We can also notice that green tea is mostly drunk plain, while milk and lemon are added primarily to black tea.

Dimension 1 explains about 15% of the variance, and dimension 2 about 14%.
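These percentages can also be read directly from the eigenvalue table stored in the MCA object (a quick sketch):

# eigenvalue, percentage of variance and cumulative percentage for the first two dimensions
head(mca$eig, 2)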